
    The role of modelling and computer simulations at various levels of brain organisation

    Computational modelling and simulations are critical analytical tools in contemporary neuroscience. Models at various levels of abstraction, corresponding to levels of organisation of the brain, attempt to capture different neuronal or cognitive phenomena. This thesis discusses several such models and applies them to behavioural and electrophysiological data. First, we model a voluntary decision process in a task where the two available options carry the same reward probability. Trial-by-trial accumulation rates are modulated by single-trial EEG features. Hierarchical Bayesian parameter estimation shows that reward probability is associated with changes in the speed of evidence accumulation. Second, we use a pairwise maximum entropy model (pMEM) to quantify irregularities in MEG resting-state networks between juvenile myoclonic epilepsy (JME) patients and healthy controls. The JME group exhibited on average fewer local minima of the pMEM energy landscape than controls in the fronto-parietal network. Our results show the pMEM to be a descriptive, generative model for characterising atypical functional network properties in brain disorders. Next, we use a hierarchical drift-diffusion model (HDDM) to study the integration of information from multiple sources. We observe imperfect integration during the accumulation of both congruent and incongruent evidence. Based on the fitted HDDM parameters, we hypothesise about the neuronal implementation by extending a biologically plausible neural mass model of decision making. Finally, we propose a spiking neuron model that unifies various components of inferential decision-making systems. The model includes populations corresponding to anatomical regions, e.g. the dorsolateral prefrontal cortex, orbitofrontal cortex, and basal ganglia. It consists of 8000 neurons and realises dedicated cognitive operations such as weighted valuation of inputs, competition between potential actions, and urgency-mediated modulation. Overall, this work paves the way for closer integration of theoretical models with behavioural and neuroimaging data.
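    As a concrete illustration of the kind of model fitting described above, the sketch below shows how a hierarchical drift-diffusion model with a single-trial EEG regressor on the drift rate could be estimated with the HDDM Python package. This is not the thesis code: the data file, the column names ('rt', 'response', 'subj_idx', 'eeg_slope') and the regression formula are illustrative assumptions.

```python
# Minimal sketch (not the thesis code): a hierarchical drift-diffusion model in
# which a single-trial EEG feature modulates the drift rate, using the HDDM
# package. File name, column names and the regression formula are assumptions.
import hddm
import pandas as pd

# Trial-level data with columns 'rt' (s), 'response' (0/1), 'subj_idx',
# and a z-scored single-trial EEG feature (hypothetical 'eeg_slope').
data = pd.read_csv('trials.csv')

# Let the drift rate v vary linearly with the EEG feature; all parameters
# receive group-level (hierarchical) priors across subjects.
model = hddm.HDDMRegressor(data, 'v ~ eeg_slope')
model.sample(2000, burn=500)   # MCMC sampling of the joint posterior
model.print_stats()            # posterior summaries, incl. the eeg_slope coefficient on v
```

    A positive posterior slope for the hypothetical eeg_slope coefficient, with a credible interval excluding zero, would indicate that trials with a larger EEG feature are associated with faster evidence accumulation, mirroring the trial-by-trial modulation described in the abstract.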

    Energy landscape of resting magnetoencephalography reveals frontoparietal network impairments in epilepsy

    Juvenile myoclonic epilepsy (JME) is a form of idiopathic generalized epilepsy. It is not yet clear to what extent JME leads to abnormal network activation patterns. Here, we characterised statistical regularities in MEG resting-state networks and their differences between JME patients and controls, by combining a pairwise maximum entropy model (pMEM) and novel energy landscape analyses for MEG. First, we fitted the pMEM to the MEG oscillatory power in the frontoparietal network (FPN) and other resting-state networks, which provided a good estimation of the occurrence probability of network states. Then, we used energy values derived from the pMEM to depict an energy landscape, with a higher energy state corresponding to a lower occurrence probability. JME patients showed fewer local energy minima than controls and had elevated energy values for the FPN within the theta, beta and gamma bands. Furthermore, simulations of the fitted pMEM showed that the proportion of time the FPN was occupied within the basins of energy minima was shortened in JME patients. These network alterations were highlighted by significant classification of individual participants using energy values as multivariate features. Our findings suggest that JME patients have altered multi-stability in selective functional networks and frequency bands in the frontoparietal cortices.
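    For readers unfamiliar with the method, the following sketch illustrates the two main ingredients: fitting a pMEM to binarized oscillatory power by moment matching, and counting local minima of the resulting energy landscape. It is an illustrative toy example, not the study's pipeline: the number of regions, the median-based binarization, the learning rate and the synthetic data are all assumptions, and exact enumeration of states is feasible only because the toy network is small.

```python
# Toy sketch of a pairwise maximum entropy model (pMEM) and its energy
# landscape. Not the study's code: data are synthetic and all settings
# (6 regions, median binarization, learning rate) are assumptions.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Synthetic "band power" in N regions over T samples, binarized to +/-1.
N, T = 6, 5000
power = rng.normal(size=(T, N))
sigma = np.where(power > np.median(power, axis=0), 1, -1)   # (T, N)

# Empirical statistics the pMEM must reproduce.
m_data = sigma.mean(axis=0)                 # <sigma_i>
C_data = (sigma.T @ sigma) / T              # <sigma_i sigma_j>

# All 2^N states, for exact evaluation of model expectations.
states = np.array(list(product([-1, 1], repeat=N)))

def model_stats(h, J):
    """Exact <sigma_i>, <sigma_i sigma_j> and energies under the pMEM."""
    E = -states @ h - 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(-E)
    p /= p.sum()
    m = p @ states
    C = (states * p[:, None]).T @ states
    return m, C, E

# Fit h and J by gradient ascent on the log-likelihood (moment matching).
h, J = np.zeros(N), np.zeros((N, N))
for _ in range(2000):
    m_mod, C_mod, _ = model_stats(h, J)
    h += 0.1 * (m_data - m_mod)
    dJ = 0.1 * (C_data - C_mod)
    np.fill_diagonal(dJ, 0.0)
    J += dJ

# A local minimum is a state with lower energy than all N single-flip neighbours.
_, _, E = model_stats(h, J)
index = {tuple(s): k for k, s in enumerate(states)}

def is_local_min(k):
    s = states[k].copy()
    for i in range(N):
        s[i] *= -1
        lower = E[index[tuple(s)]] < E[k]
        s[i] *= -1
        if lower:
            return False
    return True

minima = [k for k in range(len(states)) if is_local_min(k)]
print('number of local energy minima:', len(minima))
```

    In the study, the analogous quantity is the number of local minima per network and frequency band compared between groups; the dwell-time result corresponds to simulating the fitted model and measuring how long the sampled state stays inside the basins of those minima.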

    Cohort selection for clinical trials from longitudinal patient records: text mining approach

    Background: Clinical trials are an important step in introducing new interventions into clinical practice by generating data on their safety and efficacy. Clinical trials need to ensure that participants are similar so that the findings can be attributed to the interventions studied and not to some other factors. Therefore, each clinical trial defines eligibility criteria, which describe characteristics that must be shared by the participants. Unfortunately, the complexity of eligibility criteria may not allow them to be translated directly into readily executable database queries. Instead, they may require careful analysis of the narrative sections of medical records. Manual screening of medical records is time-consuming, thus negatively affecting the timeliness of the recruitment process. Objective: Track 1 of the 2018 National NLP Clinical Challenge (n2c2) focused on the task of cohort selection for clinical trials, with the aim of answering the following question: 'Can natural language processing be applied to narrative medical records to identify patients who meet eligibility criteria for clinical trials?' The task required the participating systems to analyze longitudinal patient records to determine whether the corresponding patients met the given eligibility criteria. This article describes a system developed to address this task. Methods: Our system consists of 13 classifiers, one for each eligibility criterion. All classifiers use a bag-of-words document representation. To prevent the loss of relevant contextual information associated with such a representation, a pattern matching approach is used to extract context-sensitive features. These are embedded back into the text as lexically distinguishable tokens, which are consequently featured in the bag-of-words representation. Supervised machine learning was chosen wherever a sufficient number of both positive and negative instances was available to learn from. A rule-based approach focusing on a small set of relevant features was chosen for the remaining criteria. Results: The system was evaluated using the micro-averaged F-measure. Four machine learning algorithms, namely support vector machine, logistic regression, naïve Bayes and gradient tree boosting, were evaluated on the training data using 10-fold cross-validation. Overall, gradient tree boosting demonstrated the most consistent performance, which peaked when oversampling was used to balance the training data. Final evaluation was performed on previously unseen test data. On average, our F-measure of 89.04% was comparable to three of the top-ranked performances in the shared task (91.11%, 90.28% and 90.21%). With an F-measure of 88.14%, we significantly outperformed these systems (81.03%, 78.50% and 70.81%) in identifying patients with advanced coronary artery disease. Conclusions: The holdout evaluation provides evidence that our system was able to identify patients eligible for the given clinical trial with high accuracy. Our approach demonstrates how rule-based knowledge infusion can improve the performance of machine learning algorithms even when trained on a relatively small dataset.
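    The sketch below illustrates the general pattern of one such per-criterion classifier: regex patterns inject lexically distinguishable marker tokens into the record text, a bag-of-words model vectorizes it, and gradient tree boosting is trained on oversampled data. It is a simplified stand-in for the described system; the patterns, marker names, toy records and labels are invented for illustration.

```python
# Illustrative sketch of one per-criterion classifier (not the n2c2 system):
# regex patterns inject context-sensitive marker tokens into the text, a
# bag-of-words model vectorizes it, and gradient tree boosting is trained on
# oversampled data. Patterns, marker names, records and labels are invented.
import re
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.utils import resample

# Hypothetical context-sensitive patterns for a single eligibility criterion.
PATTERNS = [
    (r'\b(two|2)\s+or\s+more\s+stents\b', 'FEAT_MULTIPLE_STENTS'),
    (r'\bischemi(a|c)\b.{0,40}\bangina\b', 'FEAT_ISCHEMIA_WITH_ANGINA'),
]

def add_markers(text):
    """Embed marker tokens so the bag-of-words keeps some local context."""
    for pattern, marker in PATTERNS:
        text = re.sub(pattern, lambda m: f'{m.group(0)} {marker}', text, flags=re.I)
    return text

def oversample(texts, labels, seed=0):
    """Duplicate minority-class records until both classes are equally frequent."""
    texts, labels = np.asarray(texts, dtype=object), np.asarray(labels)
    counts = np.bincount(labels)
    minority_idx = np.flatnonzero(labels == counts.argmin())
    extra = resample(minority_idx, replace=True,
                     n_samples=int(counts.max() - counts.min()), random_state=seed)
    idx = np.concatenate([np.arange(len(labels)), extra])
    return texts[idx], labels[idx]

# One classifier per criterion; a single (toy) criterion is shown here.
records = ['Patient had two or more stents placed after the infarction.',
           'No evidence of ischemia; angina was not reported at follow-up.',
           'Stress test showed ischemia with stable angina on exertion.']
labels = [1, 0, 1]   # meets / does not meet the criterion

clf = make_pipeline(CountVectorizer(), GradientBoostingClassifier(random_state=0))
X, y = oversample([add_markers(r) for r in records], labels)
clf.fit(X, y)
print(clf.predict([add_markers('Ischemia causing angina noted on admission.')]))
```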

    Breaking deadlocks: reward probability and spontaneous preference shape voluntary decisions and electrophysiological signals in humans

    Choosing between equally valued options is a common conundrum, for which classical decision theories predicted a prolonged response time (RT). This contrasts with the notion that an optimal decision maker in a stable environment should make fast and random choices, as the outcomes are indifferent. Here, we characterize the neurocognitive processes underlying such voluntary decisions by integrating cognitive modelling of behavioral responses with EEG recordings in a probabilistic reward task. Human participants performed binary choices between pairs of unambiguous cues associated with identical reward probabilities at different levels. Higher reward probability shortened RTs, and participants chose one cue faster and more frequently than the other at each probability level. The behavioral effects on RT persisted in simple reactions to single cues. Using hierarchical Bayesian parameter estimation for an accumulator model, we showed that the probability and preference effects were independently associated with changes in the speed of evidence accumulation, but not with visual encoding or motor execution latencies. Time-resolved multivariate pattern analysis (MVPA) of EEG evoked responses identified significant representations of reward certainty and preference as early as 120 ms after stimulus onset, with spatial relevance patterns maximal in middle central and parietal electrodes. Furthermore, EEG-informed computational modelling showed that the rate of change between the N100 and P300 event-related potentials modulated accumulation rates on a trial-by-trial basis. Our findings suggest that reward probability and spontaneous preference collectively shape voluntary decisions between equal options, providing a mechanism to prevent indecision or random behavior.
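    To make the time-resolved MVPA step concrete, the following sketch trains a linear classifier independently at each time point on the spatial pattern of amplitudes across electrodes and scores it with cross-validation. It is a generic, minimal version of such an analysis, not the study's pipeline; the epoch array, labels and dimensions are synthetic placeholders.

```python
# Minimal sketch of time-resolved MVPA (not the study's pipeline): a linear
# classifier is trained at every time point on the spatial pattern across
# electrodes; decoding accuracy over time shows when a condition (e.g., reward
# certainty) becomes decodable. Data, labels and dimensions are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 200, 64, 300      # epochs: trials x channels x time
epochs = rng.normal(size=(n_trials, n_channels, n_times))
labels = rng.integers(0, 2, size=n_trials)        # e.g., high vs. low reward certainty

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Decode separately at each time point with 5-fold cross-validation.
scores = np.array([
    cross_val_score(clf, epochs[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])

# With real data, the earliest time point where `scores` reliably exceeds
# chance (0.5) estimates the onset of the decodable representation.
print('peak accuracy %.2f at sample %d' % (scores.max(), scores.argmax()))
```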

    Helmholtz principle on word embeddings for automatic document segmentation

    Automatic document segmentation is receiving increasing attention in the field of natural language processing. The problem is defined as dividing a text into lexically coherent fragments. Most realistic documents are not homogeneous, so extracting their underlying structure can improve the performance of algorithms for problems such as topic recognition, document summarization, or document categorization. At the same time, recent advances in word embedding methods have accelerated the development of various text mining techniques. Models such as word2vec or GloVe allow efficient learning of representations from large textual datasets and thus provide more robust measures of word similarity. This study proposes a new document segmentation algorithm that combines an embedding-based measure of relatedness between words with the Helmholtz principle for text mining. We compare two of the most common word embedding models and demonstrate the improvement achieved by our approach on a benchmark dataset.
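    The sketch below gives a simplified picture of the embedding-based part of such a segmenter: sentences are represented by averaged word vectors, coherence across each candidate boundary is scored with cosine similarity between adjacent windows, and boundaries are placed at sufficiently deep similarity minima (a TextTiling-style criterion). The paper's Helmholtz-principle significance test is not reproduced here, and the toy embeddings merely stand in for pretrained word2vec or GloVe vectors.

```python
# Simplified sketch of embedding-based segmentation (not the paper's algorithm):
# sentence vectors are averaged word vectors, adjacent windows are compared with
# cosine similarity, and boundaries are placed at deep similarity minima.
# The random toy embeddings stand in for pretrained word2vec/GloVe vectors.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 50
embeddings = {}   # word -> vector; in practice, load pretrained word2vec or GloVe

def word_vector(word):
    # Toy lookup: unknown words get a fixed random vector (placeholder only).
    if word not in embeddings:
        embeddings[word] = rng.normal(size=EMB_DIM)
    return embeddings[word]

def sentence_vector(sentence):
    return np.mean([word_vector(w) for w in sentence.lower().split()], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def segment(sentences, window=2, depth_threshold=0.1):
    """Return indices i such that a segment boundary falls after sentence i."""
    vecs = [sentence_vector(s) for s in sentences]
    # Coherence between the windows immediately before and after each gap.
    sims = [cosine(np.mean(vecs[max(0, i - window + 1): i + 1], axis=0),
                   np.mean(vecs[i + 1: i + 1 + window], axis=0))
            for i in range(len(vecs) - 1)]
    boundaries = []
    for i in range(1, len(sims) - 1):
        depth = (sims[i - 1] - sims[i]) + (sims[i + 1] - sims[i])
        if depth > depth_threshold:          # deep local minimum => topic shift
            boundaries.append(i)
    return boundaries

sentences = ["the court ruled on the appeal",
             "the judge dismissed the case",
             "rainfall increased across the region",
             "the storm flooded several towns"]
print(segment(sentences))
```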